imagenet roulette
Neural nets are just people all the way down
When I was in seventh grade, we had to take a class called home ec. Everyone brushed it off as a super easy class. "All you have to do is cook and sew," everyone said. One of our first projects, after learning how sewing machines work, was sewing a pair of pajama pants. You'd think it'd be a pretty simple process.
How would a Latino be classified by an Artificial Intelligence system?
We all know that artificial intelligence (AI) and facial recognition can unlock your iPhone. These systems are a novelty; what most of us don't understand, however, is how the policies and algorithms that categorize faces are created and governed. Artist Trevor Paglen and researcher Kate Crawford, who question the boundaries between science and ideology, created ImageNet Roulette, a site where users can upload images and be tagged by an AI system to see how this technology categorizes us. The results can be entertaining, but also deeply prejudiced, sexist, or even racist. ImageNet Roulette was created to show how human beings are classified by machine learning systems.
600,000 Images Removed from AI Database After Art Project Exposes Racist Bias
ImageNet will remove 600,000 images of people stored on its database after an art project exposed racial bias in the program's artificial intelligence system. Created in 2009 by researchers at Princeton and Stanford, the online image database has been widely used by machine learning projects. The program has pulled more than 14 million images from across the web, which have been categorized by workers on Amazon Mechanical Turk -- a crowdsourcing platform through which people can earn money performing small tasks for third parties. According to the results of an online project by AI researcher Kate Crawford and artist Trevor Paglen, prejudices in that labor pool appear to have biased the machine learning data. Training Humans -- an exhibition that opened last week at the Prada Foundation in Milan -- unveiled the duo's findings to the public, but part of their experiment also lives online at ImageNet Roulette, a website where users can upload their own photographs to see how the database might categorize them.
See How AI Stereotypes You
Computers think they know who you are. Artificial intelligence algorithms can recognize objects from images, even faces. But we rarely get a peek under the hood of facial recognition algorithms. Now, with ImageNet Roulette, we can watch an AI jump to conclusions. Some of its guesses are funny, others…racist.
Artificial Intelligence based app ImageNet will classify you from selfie
As facial recognition software becomes unavoidable in everyday life, the developers behind a new internet app-slash-art project want to show people exactly how they look to artificial intelligence -- and the revelations are jarring. At first glance, ImageNet Roulette seems like just another viral selfie app. Want to see what you'll look like in 30 years? There's an app for that. Want to know what breed you'd be if you were a dog? There's an app for that too.
Find out How Artificial Intelligence Perceives You Through ImageNet Roulette
Thanks to artificial intelligence and facial recognition, you can unlock your phone merely by showing your face to your screen. The technology is impressive, but what's less understood is how AI classifies you behind the scenes through its algorithms. Now you can find out thanks to ImageNet Roulette, where you can upload images of yourself, be tagged as a specific type of person, and get a sense of how AI categorizes us. The results are entertaining at times, but sometimes they're rude and borderline racist. Created as part of an art exhibition -- Training Humans -- at the Prada Foundation museum in Milan, ImageNet Roulette was made to show how we as humans are classified by machine learning systems.
Fury as AI app gives racist labels and calls people a 'rape suspect'
A viral app which claims to 'honestly' classify selfies using its in-built artificial intelligence has been spewing out vile and racist labels. Furious users say their pictures have been slapped with offensive and racist terms such as 'negro', 'slant eye' and 'rape suspect' by the app which was developed at Princeton University. Developers say causing offence was exactly the intention and it was intended to be deliberately provocative to draw attention to the in-built prejudice and discrimination in many forms of machine learning. But many users are still furious that their images have played a seemingly unwitting part in the controversial project. One MailOnline staffer who tried the app was dubbed a 'rape suspect' when he uploaded his selfie.
ImageNet Roulette
ImageNet is one of the most important and historically significant training sets in artificial intelligence. In the words of its creators, the idea behind ImageNet was to "map out the entire world of objects." After its initial launch in 2009, ImageNet grew enormous: the development team scraped a collection of many millions of images from the Internet and briefly became the world's largest academic user of Amazon's Mechanical Turk, using an army of piecemeal workers to sort an average of 50 images each minute into thousands of categories. When it was finished, ImageNet consisted of over 14 million labelled images organized into more than twenty thousand categories.
Viral App Highlights the Insensitive Logic of a System at the Heart of the Current AI Boom
The tool, called ImageNet Roulette, detects human faces in any uploaded photo and assigns them labels using ImageNet, an academic training set with millions of pictures depicting almost anything imaginable, and WordNet, which supplies the corresponding text tags. As viral examples on Twitter have shown, the results of this process are more often than not completely useless -- nonsensical at best and racist or otherwise offensive at worst. In some cases, it would label black men as "offenders" or "wrongdoers," while other times it would spit out racial slurs against Asians or outdated and offensive terms for black people. As one user tweeted: "I might have a bad sense of humor but I don't think this particularly funny #imagenetroulette" (pic.twitter.com/RR578nhCOU). The offensiveness was more or less the point, says co-creator Kate Crawford, who is also a co-founder of New York University's AI Now Institute, which studies the social implications of artificial intelligence.
Fury as viral 'ImageNet' app gives racist labels and calls people a 'rape suspect'
A viral app which classifies selfies using its in-built artificial intelligence has been spewing out vile and racist labels and enraging users. Many took to social media to condemn the racist and offensive software but makers of the app, at Princeton University, say causing offence was exactly the intention. It was intended to be deliberately provocative to draw attention to the in-built prejudice and discrimination in many forms of machine learning. However, many users, it seems, didn't fully get the idea of the art project and were outraged at what their images were labelled as. One MailOnline staffer who tried the app was grotesquely dubbed a 'rape suspect' from an innocuous picture.